The number of standardized climate-policy documents and the frequency of their publication are increasing significantly. These documents are long and tedious to analyze manually, especially for policy experts, lawmakers, and citizens who lack access to data-analytics tools or the domain expertise to use them. Potential consequences include reduced citizen governance of and involvement in climate policy, and an overall surge in analytics costs that makes such analysis less accessible to the public. In this work, we use a Latent Dirichlet Allocation-based pipeline for the automatic summarization and analysis of the ten-year national energy and climate plans (NECPs) for the period from 2021 to 2030, established by the 27 Member States of the European Union. We focus on analyzing policy framing, the language used to describe specific issues, to detect essential nuances in how governments frame their climate policies and pursue their climate goals. Our methods leverage topic modeling and clustering for the comparative analysis of policy documents across countries, which allows for easier integration into user-friendly applications supporting the development of climate-policy theories and processes. This would, in turn, lead to better citizen governance of and engagement with climate policies and public-policy research.
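A minimal sketch of such an LDA pipeline, assuming one plain-text NECP file per Member State and using scikit-learn; the directory name, topic count, and preprocessing choices are illustrative assumptions, not the authors' exact setup:

```python
# Minimal LDA topic-modeling sketch for a corpus of policy documents.
# Assumes one plain-text file per Member State; all parameters are illustrative.
from pathlib import Path
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [p.read_text(encoding="utf-8") for p in sorted(Path("necp_texts").glob("*.txt"))]

# Bag-of-words representation with basic stop-word filtering.
vectorizer = CountVectorizer(stop_words="english", max_df=0.9, min_df=2)
X = vectorizer.fit_transform(docs)

# Fit an LDA model; the number of topics is a tunable assumption.
lda = LatentDirichletAllocation(n_components=10, random_state=0)
doc_topics = lda.fit_transform(X)  # per-document topic distributions

# Top words per topic, as a crude summary of how each issue is framed.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-8:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```

The per-document topic distributions (`doc_topics`) can then be clustered to compare how different countries frame the same issues.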
Deep spiking neural networks (SNNs) offer the promise of low-power artificial intelligence. However, training deep SNNs from scratch, or converting deep artificial neural networks to SNNs without loss of performance, has been a challenge. Here we propose an exact mapping from a network with Rectified Linear Units (ReLUs) to an SNN that fires exactly one spike per neuron. For our constructive proof, we assume that an arbitrary multi-layer ReLU network, with or without convolutional layers, batch normalization, and max-pooling layers, was trained to high performance on some training set. Furthermore, we assume that we have access to a representative example of input data used during training and to the exact parameters (weights and biases) of the trained ReLU network. The mapping from deep ReLU networks to SNNs causes zero percent drop in accuracy on CIFAR10, CIFAR100, and the ImageNet-like datasets Places365 and PASS. More generally, our work shows that an arbitrary deep ReLU network can be replaced by an energy-efficient single-spike neural network without any loss of performance.
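The core idea, that a single spike time can carry a ReLU activation without information loss, can be illustrated with a toy time-to-first-spike encoding; this simplified scheme is our illustration, not the paper's exact layer-wise mapping, which also rescales weights and biases:

```python
# Toy time-to-first-spike (TTFS) encoding: a neuron's single spike time
# linearly encodes its ReLU activation, so decoding is lossless.
# Simplified illustration, not the paper's exact layer-wise mapping.
import numpy as np

T_MAX = 1.0  # end of the coding window; a neuron that never spikes encodes 0

def encode(a, a_max):
    """Map activations a in [0, a_max] to spike times: larger a -> earlier spike."""
    return T_MAX * (1.0 - a / a_max)

def decode(t, a_max):
    """Recover activations from spike times."""
    return a_max * (1.0 - t / T_MAX)

rng = np.random.default_rng(0)
a = np.maximum(rng.normal(size=5), 0.0)   # ReLU activations
a_max = max(a.max(), 1e-9)
t = encode(a, a_max)
print(np.allclose(decode(t, a_max), a))   # True: one spike per neuron, zero loss
```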
The visual oddity task is regarded as a universal, culture-independent test of analytic intelligence for humans. Advances in artificial intelligence have led to important breakthroughs, yet competing with humans on such analytic intelligence tasks remains challenging and typically resorts to non-biological architectures. We propose a biologically realistic system that receives inputs from synthetic eye movements (saccades) and processes them with neurons incorporating the dynamics of neocortical neurons. We introduce a procedurally generated visual oddity dataset to train both an architecture extending conventional relational networks and our proposed system. Both approaches surpass human accuracy, and we find that they share the same essential underlying mechanisms of reasoning. Finally, we show that the biologically inspired network achieves superior accuracy, learns faster, and requires fewer parameters than the conventional network.
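A hypothetical sketch of what procedural odd-one-out generation can look like: each puzzle contains several panels that share a property while one violates it. The panel count, image size, and the size-based oddity rule are all illustrative assumptions, not the paper's generator:

```python
# Hypothetical odd-one-out puzzle generator: five circles share a radius,
# one is larger. All design choices here are illustrative.
import numpy as np

def make_puzzle(rng, n_panels=6, size=32):
    common_radius = rng.integers(4, 8)
    odd_radius = common_radius + rng.integers(3, 6)
    odd_idx = rng.integers(n_panels)
    panels = np.zeros((n_panels, size, size), dtype=np.float32)
    for i in range(n_panels):
        r = odd_radius if i == odd_idx else common_radius
        cx, cy = rng.integers(r, size - r, size=2)
        yy, xx = np.ogrid[:size, :size]
        panels[i][(xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2] = 1.0
    return panels, odd_idx  # image stack and the index of the odd panel

rng = np.random.default_rng(0)
panels, label = make_puzzle(rng)
```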
Synthesizing large logic programs through inductive logic programming (ILP) typically requires intermediate definitions. However, cluttering the hypothesis space with intensional predicates often degrades performance. In contrast, gradient descent provides an efficient way to find solutions in such high-dimensional spaces. Neuro-symbolic ILP approaches have so far not fully exploited this. We propose an ILP-based synthesis method that benefits from large-scale predicate invention by exploiting the efficacy of high-dimensional gradient descent. We find symbolic solutions containing more than ten auxiliary definitions. This exceeds what existing neuro-symbolic ILP systems have achieved and thus constitutes a milestone for the field.
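A schematic analogy, not the paper's actual system, for why gradient descent copes with a large hypothesis space: treat many candidate invented predicates as boolean feature columns and let a sparse, differentiable model discover which few of them define the target predicate:

```python
# Toy analogy: the target predicate is a conjunction of 2 out of 500 candidate
# (invented) predicates, given as boolean columns. Gradient descent on a sparse
# linear model recovers which candidates matter. Schematic illustration only.
import torch

torch.manual_seed(0)
n, k = 400, 500
candidates = torch.randint(0, 2, (n, k)).float()
target = candidates[:, 3] * candidates[:, 42]   # hidden rule: p3 AND p42

w = torch.zeros(k, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([w, b], lr=0.05)
for _ in range(500):
    logits = candidates @ w + b
    loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, target)
    loss = loss + 1e-2 * w.abs().sum()          # L1 keeps the learned rule sparse
    opt.zero_grad(); loss.backward(); opt.step()

print(w.detach().abs().topk(2).indices)          # expected: indices 3 and 42
```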
Focusing on stable roommates (SR) instances, we contribute to the toolbox for conducting experiments on stable matching problems. We introduce a polynomial-time computable pseudometric to measure the similarity of SR instances, analyze its properties, and use it to create a map of SR instances. The map visualizes 460 synthetic SR instances (each sampled from one of several statistical cultures) as follows: every instance is a point in the plane, and two points are close on the map if the corresponding SR instances are similar to each other. Subsequently, we conduct several exemplary experiments and depict their results on the map, illustrating the map's usefulness as a non-aggregate visualization tool, the diversity of the generated dataset, and the power of using instances sampled from different statistical cultures. Finally, to demonstrate that our framework can also be applied to other matching problems under preferences, we create and analyze a map of stable marriage instances.
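A sketch of how such a "map of instances" can be built: compute pairwise distances between instances and embed them into the plane with multidimensional scaling. The distance function below (raw rank disagreement) is an illustrative placeholder for the paper's pseudometric:

```python
# Embed pairwise instance distances into 2D with multidimensional scaling.
# The distance used here is a placeholder, not the paper's pseudometric.
import itertools
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
n_agents, n_instances = 8, 40

def random_sr_instance():
    """Each agent ranks all other agents: one permutation per agent."""
    return np.array([rng.permutation(n_agents - 1) for _ in range(n_agents)])

instances = [random_sr_instance() for _ in range(n_instances)]

def distance(a, b):
    """Placeholder distance: total rank disagreement between two instances."""
    return float(np.abs(a - b).sum())

D = np.zeros((n_instances, n_instances))
for i, j in itertools.combinations(range(n_instances), 2):
    D[i, j] = D[j, i] = distance(instances[i], instances[j])

# Each instance becomes a 2D point; similar instances land close together.
points = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(D)
```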
We introduce a new method of internal replay that modulates the frequency of rehearsal based on network depth. While replay strategies mitigate the effects of catastrophic forgetting in neural networks, recent work on generative replay has shown that performing rehearsal only in the deeper layers of the network improves continual-learning performance. However, generative approaches introduce additional computational overhead, limiting their applicability. Motivated by the observation that early layers of neural networks forget less, we propose to use intermediate-level features during replay and to update network layers at different frequencies. This reduces the computational burden by omitting the deeper layers of the generator and the early layers of the main model. We name our method Progressive Latent Replay and show that it outperforms Internal Replay while using fewer resources.
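A schematic of depth-dependent rehearsal, assuming a two-block split of the model and stored intermediate features from past tasks; the frequencies, architecture, and random stand-in data are illustrative assumptions:

```python
# Depth-dependent replay sketch: the deep block is rehearsed on stored
# intermediate features, while early layers receive no replay updates.
import torch
import torch.nn as nn

early = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
deep = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 10))
opt = torch.optim.Adam(list(early.parameters()) + list(deep.parameters()), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Buffer of intermediate features from past tasks (random stand-ins here).
replay_feats = torch.randn(64, 256)
replay_labels = torch.randint(0, 10, (64,))

for step in range(1000):
    x = torch.randn(32, 1, 28, 28)               # current-task batch (stand-in)
    y = torch.randint(0, 10, (32,))
    loss = loss_fn(deep(early(x)), y)
    if step % 2 == 0:                            # deep block: rehearse often on
        loss = loss + loss_fn(deep(replay_feats), replay_labels)  # latent features
    opt.zero_grad(); loss.backward(); opt.step()
```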
We hypothesize that, due to the greedy nature of learning in multi-modal deep neural networks, these models tend to rely on just one modality while under-fitting the others. Such behavior is counter-intuitive and hurts the models' generalization, as we observe empirically. To estimate a model's dependence on each modality, we compute the gain in accuracy when the model also has access to the other modalities. We refer to this gain as the conditional utilization rate. In our experiments, we consistently observe an imbalance in conditional utilization rates between modalities, across multiple tasks and architectures. Since the conditional utilization rate cannot be computed efficiently during training, we introduce a proxy based on the pace at which the model learns from each modality, which we refer to as the conditional learning speed. We propose an algorithm that balances the conditional learning speeds between modalities during training and demonstrate that it indeed addresses the problem of greedy learning. The proposed algorithm improves the model's generalization on three datasets: Colored MNIST, ModelNet40, and NVIDIA Dynamic Hand Gesture.
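An illustrative computation of the conditional utilization rate as described above, using stand-in predictions rather than a real multi-modal model; the names and accuracies are assumptions for demonstration only:

```python
# Conditional utilization rate: accuracy gain a model obtains when one
# modality is restored alongside the others. Stand-in predictions only.
import numpy as np

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 1000)

def accuracy(preds):
    return float((preds == labels).mean())

# Stand-ins: predictions with both modalities vs. with modality B removed.
preds_full = np.where(rng.random(1000) < 0.9, labels, 1 - labels)   # ~90% acc
preds_wo_b = np.where(rng.random(1000) < 0.7, labels, 1 - labels)   # ~70% acc

# Conditional utilization rate of modality B: gain from adding it back.
u_b = accuracy(preds_full) - accuracy(preds_wo_b)
print(f"conditional utilization of modality B: {u_b:.2f}")
```

During training, the conditional learning speed proxy would compare how fast such accuracies improve per modality, which is cheap to track, whereas the utilization rate above requires extra ablated evaluations.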
Learning complex programs through inductive logic programming (ILP) remains a formidable challenge. Existing higher-order-enabled ILP systems show improved accuracy and learning performance, but are still hampered by the limitations of their underlying learning mechanisms. Experimental results show that our extension of the versatile learning-from-failures paradigm with higher-order definitions significantly improves learning performance without the heavy human guidance required by existing systems. Furthermore, we provide a theoretical framework capturing the class of higher-order definitions handled by our extension.
Automatic speech recognition (ASR) is a capability that enables a program to transcribe human speech into written form. Recent developments in artificial intelligence (AI) have led to high-accuracy ASR systems based on deep neural networks, such as the recurrent neural network transducer (RNN-T). However, the core components of these approaches, and the operations they perform, diverge from their powerful biological counterpart, the human brain. On the other hand, current developments in biologically inspired models based on spiking neural networks (SNNs) lag behind in accuracy and focus mainly on small-scale applications. In this work, we revisit biologically plausible models and substantially improve their capabilities by drawing inspiration from the diverse neural and synaptic dynamics found in the brain. In particular, we introduce neural connectivity concepts emulating axo-somatic and axo-axonic synapses. Based on these, we propose novel deep-learning units with enriched neuro-synaptic dynamics and integrate them into the RNN-T architecture. We demonstrate, for the first time, that a biologically realistic implementation of a large-scale ASR model can yield competitive performance levels compared with existing deep-learning models. Specifically, we show that such an implementation has several advantages, such as reduced computational cost and lower latency, which are critical for speech-recognition applications.
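A toy sketch contrasting ordinary input integration with the kind of axo-axonic-style modulation mentioned above, where a second signal gates the output by shifting the firing threshold rather than the membrane potential; the dynamics and constants are illustrative, not the paper's equations:

```python
# Toy spiking unit: one input drives the membrane potential, another
# modulates the firing threshold (axo-axonic-style gating). Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
T, tau, v, theta0 = 200, 20.0, 0.0, 1.0
spikes = np.zeros(T)

drive = rng.random(T) * 0.15        # ordinary (axo-dendritic) input current
modulator = rng.random(T)           # gating signal acting on the threshold

for t in range(T):
    v = v * (1 - 1 / tau) + drive[t]            # leaky integration of the drive
    theta = theta0 * (1.0 + modulator[t])       # threshold raised by modulator
    if v >= theta:                              # spike only if the drive beats
        spikes[t] = 1.0                         # the dynamically gated threshold
        v = 0.0                                 # reset after spiking

print("spike count:", int(spikes.sum()))
```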
We examine the role of memorization in deep learning, drawing connections to capacity, generalization, and adversarial robustness. While deep networks are capable of memorizing noise data, our results suggest that they tend to prioritize learning simple patterns first. In our experiments, we expose qualitative differences in gradient-based optimization of deep neural networks (DNNs) on noise versus real data. We also demonstrate that with appropriately tuned explicit regularization (e.g., dropout) we can degrade DNN training performance on noise datasets without compromising generalization on real data. Our analysis suggests that dataset-independent notions of effective capacity are unlikely to explain the generalization performance of deep networks trained with gradient-based methods, because the training data itself plays an important role in determining the degree of memorization.
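A minimal version of the noise-versus-real comparison described above: train the same small network on structured labels and on randomly shuffled labels and compare how quickly each is fit. The architecture and synthetic data are stand-ins, not the paper's experimental setup:

```python
# Train on real vs. shuffled labels; real structure is fit much faster,
# while shuffled labels require pure memorization. Stand-in setup.
import torch
import torch.nn as nn

def fit(x, y, epochs=200):
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward(); opt.step()
    return loss.item()

torch.manual_seed(0)
x = torch.randn(512, 20)
y_real = (x[:, :10].sum(dim=1) > 0).long()       # labels with real structure
y_noise = y_real[torch.randperm(512)]            # same labels, randomized

print("real:", fit(x, y_real), "noise:", fit(x, y_noise))
```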